Video surveillance (CCTV) is a technology that is nowadays deeply woven into the everyday life of many people, who have come to expect it in many varied circumstances (Ossola, 2019). The rationale behind the installation of these systems seems very clear for governments. For example, on Buffalo’s (NY) open data website, one can read that “the City of Buffalo deploys a real-time, citywide video surveillance system to augment the public safety efforts of the Buffalo Police Department”. Yet the development of this technology is not exempt from controversy. For instance, many observers claim that the expansion of video surveillance poses an unregulated threat to privacy (ACLU, 2021). Still, many people seem willing to accept this loss of privacy because the surge in video surveillance makes them feel safer (Madden & Rainie, 2015).
Throughout this research, we challenge the widespread belief that people who have “nothing to hide” should be content with the expansion of CCTV networks because the latter makes them safer (Madden & Rainie, 2015). Indeed, on top of the many privacy issues linked to this surge in video surveillance systems, one might legitimately ask whether these cameras actually make people safer.
The goal of the first phase of this project is to investigate the crime deterrent potential of CCTVs in an American city. This potential will also be compared to the different types of crime that are committed in this area. In a second phase, the dispersion of CCTVs within the city will be investigated. Indeed, according to some research, mass surveillance has a stronger impact on communities already disadvantaged by their poverty, race, religion, ethnicity, or immigration status (Gellman & Adler-Bell, 2017). We would like to see whether our data enables us to validate or invalidate this theory. It would also be extremely interesting, even though challenging, to see whether the installation of surveillance systems could potentially create even more pernicious issues such as crime displacement (Waples, Gill & Fisher, 2009).
In sum we argue that, in a world where CCTVs and other surveillance systems are flourishing, it might be beneficial to take a step back and question both the efficacy and the implementation design of such technologies, since they are often portrayed by different stakeholders as miraculous solutions to very complex issues.
Augustin: Augustin obtained a degree in Business Administration at the University of St-Gallen where he had the opportunity to develop a strong interest in digital business ethics. He wrote his bachelor’s thesis on the privacy implications of the use of fear appeals in home surveillance devices’ marketing strategy.
Marine: Marine obtained a bachelor’s degree in Law at the UBO (Université de Bretagne-Occidentale). She is currently enrolled in the Master DCS (Droit, Criminalité et Sécurité des technologies de l’information) at the University of Lausanne. Last year, she had the opportunity to take a data protection course and learn more about cyber security and crime in general.
Daniel: Daniel is an exchange student from Koblenz, Germany. Daniel obtained a bachelor’s degree in Business Administration/Management at the WHU - Otto Beisheim School of Management, Germany. He is currently pursuing a Master of Management with a focus on family businesses, entrepreneurship and data science in his courses. Of particular relevance to this project, Daniel spent several months in the United States after high school and can thus relate to the topic of police violence and crime in the US.
Firstly, from our respective backgrounds, we derive a strong interest in new technologies and privacy. We believe that every person is entitled to the fundamental right to privacy. Unfortunately, one observes an increasing tendency of governments and other stakeholders (e.g. businesses such as GAFA (Google, Amazon, Facebook, Apple)) to take more and more control over our daily lives through digital technologies such as cameras, computers or smartphones. For these reasons, it is interesting to ask ourselves whether this massive collection of our data leads to more security or to more restrictions of our freedom.
Secondly, under European law such as the GDPR, the collection and processing of our data must be proportionate to the purpose of that processing. It is therefore of interest to us to determine whether similar principles apply in the United States, and to see whether the installation of cameras, with the objective of security, really reduces crime and makes a city more secure.
Thirdly, it must also be said that crime and the legislative discussions regarding the right to carry a gun in the United States are fascinating. At first sight, it seems as if the freedom to carry a gun makes the US more prone to crimes such as mass shootings. To verify or falsify our hypotheses, we also want to see, through the datasets we obtained, what kind of crime prevails in American cities and how it evolves according to the districts and their particularities.
We have four raw data sets. All data sets were retrieved from the Baltimore government’s open data portal. We found data about crimes committed in Baltimore, CCTV locations in the city and poverty rates. We also found a data set showing the reference boundaries of the Community Statistical Area geographies. The latter will certainly be helpful to match each data set’s observations together.
This dataset represents the location and characteristics of major crimes against persons, such as homicide, shooting, robbery, aggravated assault, etc. within the city of Baltimore. This data set contains 350’294 observations.
RowID = ID of the row, 350’294 in total
CrimeDateTime = date and time of the crime. Format yyyy/mm/dd hh:mm:sstzd
CrimeCode = Code corresponding to the type of crime committed
Location = Textual information on where the crime was committed
Description = Textual description of the crime committed corresponding to a CrimeCode.
Inside/Outside = Provides information on whether crime was committed inside or outside
Weapon = Provides details on what weapon has been used, if any
Post = Number corresponding to the Police Post concerned. A map with corresponding police posts can be found here: http://moit.baltimorecity.gov/sites/default/files/police_districts_w_posts.pdf?__cf_chl_captcha_tk__=pmd_NhnE710SS8QEWdKOyT5Ug6IJZGoF6iIntFYY30vctes-1634309136-0-gqNtZGzNAxCjcnBszQPl
District = Name of the district, regrouping different neighbourhoods. Baltimore is officially divided into nine geographical regions: North, Northeast, East, Southeast, South, Southwest, West, Northwest, and Central.
Neighborhood = Name of the neighborhood in which the crime was committed. Most names match the neighborhood names contained in the dataset about Community Statistical Areas.
Latitude = Latitude, Coordinate system: EPSG:4326 WGS 84
Longitude = Longitude, Coordinate system: EPSG:4326 WGS 84
GeoLocation = Combination of latitude and longitude, Coordinate system: EPSG:4326 WGS 84
Premise = Information on the premise where the crime was committed. One counts more than 120’000 observations in the streets.
crime_data <- read.csv(file = here::here("data/Baltimore_Part1_Crime_data.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/part1-crime-data/explore]
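As a side note, the CrimeDateTime strings documented above can be parsed into proper date-time objects. A minimal sketch, assuming the yyyy/mm/dd hh:mm:ss pattern described earlier (the example value is hypothetical):

```r
# Hypothetical example value; the format string mirrors the documented pattern
x <- "2021/06/15 14:30:00"
parsed <- as.POSIXct(x, format = "%Y/%m/%d %H:%M:%S", tz = "UTC")
format(parsed, "%Y-%m-%d")  # "2021-06-15"
```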
This dataset represents closed circuit camera locations capturing activity within 256ft (~2 blocks). It contains 837 observations in total.
X = Longitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator
Y = Latitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator
OBJECTID = ID of the camera, 837 in total
CAM_NUM = Unique number attributed to the camera. This might suggest that the data set does not show the location of every camera in Baltimore. Note that the CAM_NUM column contains many zeros, which we could not relate to anything; we are still in the process of figuring out their exact meaning.
LOCATION = Textual information on where the camera is located
PROJ = Name of the area in which the camera is located. It does not always match the name of the “standard” community statistical areas.
XCOORD = Longitude, Coordinate system: EPSG:4326 WGS 84
YCOORD = Latitude, Coordinate system: EPSG:4326 WGS 84
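Since this file mixes two coordinate systems (EPSG:3857 in X/Y, EPSG:4326 in XCOORD/YCOORD), one can convert between them. A sketch using the sf package (not the sp workflow we use later), with a hypothetical point near Baltimore:

```r
library(sf)
# A hypothetical point in lon/lat (EPSG:4326, WGS 84)
pt <- st_sfc(st_point(c(-76.62, 39.28)), crs = 4326)
# Reproject to Pseudo-Mercator (EPSG:3857), i.e. the system of the X/Y columns
st_coordinates(st_transform(pt, 3857))
```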
cctv_data <- read.csv(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/cctv-locations-crime-cameras/explore]
This dataset provides information about the percent of family households living below the poverty line. This indicator measures the percentage of households whose income fell below the poverty threshold out of all households in an area.
Federal and state governments use such estimates to allocate funds to local communities. Local communities use these estimates to identify the number of individuals or families eligible for various programs. This information will be useful for us to study the dispersion of CCTVs within Baltimore in comparison to the poverty level in a given area. This dataset contains 55 observations, one percentage for each community statistical area. There seems to be only one NA. The most relevant variables are the following:
CSA2010 = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.
hhpov15 - hhpov19 = each of these five columns contains the percent of family households living below the poverty line for a given year, from 2015 to 2019.
Shape_Area - Shape_Length = standard fields to determine the area and the perimeter of a polygon
poverty_data <- read.csv(file = here::here("data/Percent_of_Family_Households_Living_Below_the_Poverty_Line.csv"))
Source of the data set: [https://arcg.is/1qOrnH]
This dataset provides information about the Community Statistical Area geographies for Baltimore City, based on aggregations of Census tract (2010) geographies. It will serve as a geographical point of reference for us to match each dataset’s observations together. This dataset contains 55 observations, one for each area. The most relevant variables are the following:
community = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.
neigh = name of the neighbourhoods contained in the area.
tracts = census tract associated with each neighbourhood. An interactive map of neighborhood statistical areas with census tracts is available online (http://planning.baltimorecity.gov/sites/default/files/Neighborhood%20Statistical%20Areas%20with%20Census%20Tracts.pdf?__cf_chl_captcha_tk__=pmd_5qD.WnCEfWnEa5h1muEPfTVDhN2uheRFagwmglbtKxg-1634299783-0-gqNtZGzNAzujcnBszQO9).
area_data <- read_csv(file = here::here("data/Community_Statistical_Areas__CSAs___Reference_Boundaries.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/community-statistical-area-1/explore?location=39.284605%2C-76.620550%2C12.26]
Here, the main goal is to transform the area data set into a new data set containing one observation per neighborhood. Indeed, it is important to distinguish neighborhoods, which are smaller areas, from communities, which are larger and often contain several neighborhoods. We achieve this by first creating a new data set in which each neighborhood is assigned to a community using separate_rows, and second by establishing a new lower-case column for a later merge. To do so, we combine the mutate function with tolower, which converts uppercase strings to lowercase.
area_data2 <- separate_rows(area_data, Neigh, sep = ", ") #Creation of a new data set with each neighborhood being assigned to an area
area_data2 <- mutate(area_data2,neigh=tolower(Neigh)) #Creation of new column with lower case letters
As the neighborhood names in the crime data set are written in lower case, we again create a lower-case column to join the two data sets. We join the area data set and the crime data set using left_join. Next, we use the anti_join function to find which observations did not match. The outcome lists all the neighborhoods that did not match; the issues mostly come from spelling differences (e.g. Mount written Mt.). As very few observations fail to match, we change the names manually.
crime_data <- mutate(crime_data,neigh=tolower(crime_data$Neighborhood)) #Creation of new column with lower case letters
crime_data_with_areas <- crime_data %>%
left_join(area_data2,by="neigh") #We create a new data sets that contains the name of the area in which the crime was committed
crime_data_NAs <- crime_data %>%
anti_join(area_data2,
by="neigh") #Here is the list of all the NAs we have
unique(crime_data_NAs$neigh) #We see that we have very few unassigned names, we can change this by hand.
crime_data["neigh"][crime_data["neigh"]=="mount washington"] <- "mt. washington"
crime_data["neigh"][crime_data["neigh"]=="carroll - camden industrial area"] <- "caroll-camden industrial area"
crime_data["neigh"][crime_data["neigh"]=="patterson park neighborhood"] <- "patterson park"
crime_data["neigh"][crime_data["neigh"]=="glenham-belhar"] <- "glenham-belford"
crime_data["neigh"][crime_data["neigh"]=="new southwest/mount clare"] <- "hollins market"
crime_data["neigh"][crime_data["neigh"]=="mount winans"] <- "mt. winans"
crime_data["neigh"][crime_data["neigh"]=="rosemont homeowners/tenants"] <- "rosemont"
crime_data["neigh"][crime_data["neigh"]=="broening manor"] <- "o'donnell heights"
crime_data["neigh"][crime_data["neigh"]=="boyd-booth"] <- "booth-boyd"
crime_data["neigh"][crime_data["neigh"]=="lower herring run park"] <- "herring run park"
crime_data["neigh"][crime_data["neigh"]=="mt pleasant park"] <- "mt. pleasant park"
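The repeated assignments above could equally be expressed with a single named lookup vector. A base-R sketch (the two entries shown are taken from the fixes above; the helper name fixes is ours):

```r
# Named lookup: old (misspelled) name -> canonical name
fixes <- c("mount washington" = "mt. washington",
           "mount winans"     = "mt. winans")
# Replace only the rows whose name appears in the lookup
idx <- crime_data$neigh %in% names(fixes)
crime_data$neigh[idx] <- fixes[crime_data$neigh[idx]]
```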
We get rid of the 764 remaining observations which have no information about the neighborhood. This represents a very tiny portion of our total number of observations. Finally, we use the semi_join function to create the final data set, which is essentially the original data set minus those 764 observations.
Finally, we want to get rid of the observations dating before 2000, as the Baltimore CCTV program started in the year 2000. We first check the structure of the data set using the str function. We notice that the CrimeDateTime column is not a date. We change that and finally filter the information we want to keep using filter.
crime_data_with_areas <- crime_data %>%
semi_join(area_data2,by="neigh") %>%
left_join(area_data2,by="neigh") #Here we have the final data frame with a community for each crime
str(crime_data_with_areas) # We see that the crime CrimeDateTime column is not a date. We thus convert it.
crime_data_with_areas$CrimeDateTime <- as.Date(crime_data_with_areas$CrimeDateTime)
crime_data_with_areas <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2000-01-01")) #We had 24 observations dating back to before the year 2000 and 24 observations with no date. We only keep crimes committed after 2000 as the CCTV program in Baltimore started in 2000.
The standard community statistical area system includes 56 areas; among these 56 areas, the jail is also included. The poverty data, however, only provides 55 statistical areas, since there is obviously no poverty data about the jail. To solve this inconsistency, we add a new line. Moreover, we needed to fill a missing value for South Baltimore in the year 2019: here we took the average of the past years.
poverty_data <- rbind(poverty_data,list(56,"Unassigned -- Jail",0,0,0,0,0,0,0))
poverty_data[48,7] <- c(poverty_data[48,3],poverty_data[48,4],poverty_data[48,5],poverty_data[48,6]) %>% mean() #The poverty rate of South Baltimore in 2019 was missing. This area's rate over the past years seems stable (it is always one of the richest areas), which is why we compute the mean of the past 4 years to replace the missing value.
The CCTV data set seems rather tidy. We will mostly use the first two columns, which contain information about the location of each CCTV. Therefore, we still need to make sure there are no missing values in these two columns. We do so by combining the which and is.na functions and by filtering for potential empty observations.
which(is.na(cctv_data$X))
#> integer(0)
which(is.na(cctv_data$Y))
#> integer(0)
filter(cctv_data, X=="")
#> [1] X Y OBJECTID
#> [4] CAM_NUM NOTES LOCATION
#> [7] PROJ XCOORD YCOORD
#> [10] created_user created_date last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
filter(cctv_data, Y=="")
#> [1] X Y OBJECTID
#> [4] CAM_NUM NOTES LOCATION
#> [7] PROJ XCOORD YCOORD
#> [10] created_user created_date last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
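An arguably more compact equivalent check is possible in base R. A sketch, assuming the same X and Y columns as above:

```r
# NA counts per coordinate column; all zeros means no missing coordinates
colSums(is.na(cctv_data[c("X", "Y")]))
# Count rows where either coordinate is an empty value
sum(cctv_data$X == "" | cctv_data$Y == "", na.rm = TRUE)
```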
#We are not sure this is the proper technique, but by doing so we ensure that we have neither NAs nor empty values, and thus that our data set is tidy.
The original CCTV dataset presented a slight challenge: the neighborhood names it contained did not match the standard neighborhood names. To solve this, we resorted to geospatial counting.
Our procedure includes the following steps. After reading the table and converting the data into a data table, we define which columns hold the coordinates; here we have several types of coordinates, and we use X and Y. Spatial objects include a special attribute called the CRS, which is the coordinate reference system being used. We continue by defining an object crs.geo1 as the coordinate system used for all our files. Next, we use the proj4string function, to which we assign this crs.geo1 object.
#read in data table
balt_dat <- fread(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))
#convert to data table
balt_dat <- as.data.table(balt_dat)
#make data spatial
coordinates(balt_dat) <- c("X","Y")
crs.geo1 <- CRS("+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs +type=crs")
proj4string(balt_dat) <- crs.geo1
Then we plot the output (a cloud of points representing all the CCTVs).
plot(balt_dat, pch = 20, col = "steelblue") #We can use the plot function to quickly plot the SpatialPointsDataFrame that we created. We see a bunch of points which represent the CCTV locations in Baltimore.
Next, we have to work with the shapefile, which is another special file type. Basically, it is a set of polygons representing the different areas of the city of Baltimore. We downloaded this file from the Open Baltimore portal, read it in, and again assigned our crs.geo1 coordinate system to it. In this way we ensure that our files share the same coordinate system.
#read in shapefile of baltimore
baltimore <- readOGR(dsn = here::here("data/Community_Statistical_Area"), layer = "Community_Statistical_Area") #name of file and object
proj4string(baltimore) <- crs.geo1
Again we plot the results.
#plot
plot(baltimore,main="Spread of CCTVs in different communities of Baltimore")
plot(balt_dat,pch=20, col="steelblue" , add=TRUE) #If we plot these two lines together, what we obtain is a map of Baltimore, with the 56 community statistical areas and the CCTVs on top of the map.
To quantify these results, we need R to count for us how many CCTVs belong to which area. Here, the function over determines which polygon each CCTV lies over. Next, we create a new object called counts, turn it into a data frame (so that it is easier to work with) and use sum(counts$Freq) to verify the total number of observations. From the results we see that counts has 41 rows, so there are only 41 out of 56 areas containing at least one CCTV.
#Perform the count
proj4string(balt_dat)
proj4string(baltimore) #To be able to perform the count, we must ensure that the two spatial files have a similar CRS. This is the case as we attributed these two files "crs.geo1"
res2 <- over(balt_dat,baltimore) #This function tells you to which community each CCTV belongs to
counts <- table(res2$community)
counts <- as.data.frame(counts)
colnames(counts)[1] <- "Community"
sum(counts$Freq) #We see that we have 836 observations in total, which is a good sign as our initial CCTV data set contained 836 observations
To make this workable, we need to create a new CCTV file in which we replace each NA location with 0. Lastly, we create a new column with the mutate function to calculate the CCTV density, i.e. the number of CCTVs per area divided by the total number of CCTVs.
CCTV_per_area <- area_data[2] %>%
left_join(counts,by="Community") #One must add the communities where there are no counts i.e no CCTV
CCTV_per_area[is.na(CCTV_per_area)] <- 0
CCTV_per_area <- mutate(CCTV_per_area, density_perc=(CCTV_per_area$Freq/(sum(CCTV_per_area$Freq)))*100)
Here we use the %in% operator to ensure that the communities in the Baltimore dataset are the same as the ones in the CCTV_per_area dataset. As this only returns TRUE values, we know the match works and we can proceed with the analysis.
library(tmap)
baltimore$community %in% CCTV_per_area$Community
Next, we perform a left_join between the Baltimore dataset and CCTV_per_area. To hedge against the different writing styles (the column name is written once with a capital letter and once with a small letter), we use the by vector at the end. Finally, we create the map with the tmap package. The tmap package works somewhat like the ggplot2 package: first we define an element, always starting with the tm_shape argument, and then we can add as many arguments as we wish with the plus operator. We use the Baltimore dataset, fill it with the density percentage, define some breaks, set the borders and finally the layout.
baltimore@data <- left_join(baltimore@data, CCTV_per_area, by = c('community' = 'Community'))
CCTV_dens_map <- tm_shape(baltimore) + tm_fill(col = "density_perc", title ="CCTV density per Area in %", breaks=c(0,1,2,3,4,5,6,7,8,9,10,11)) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Next, we create CrimeRatePerArea. To achieve this, we group and summarize the crime data per community, which enables us to compute the crime rate for each area. Again, we add one more row to the calculations because we have no values for the jail, and we check that everything adds up to 100, which lets us proceed confidently.
CrimeRatePerArea <- crime_data_with_areas %>%
group_by(Community) %>%
summarize(CrimeRatePerArea=(n()/nrow(crime_data_with_areas))*100)
CrimeRatePerArea <- rbind(CrimeRatePerArea,list("Unassigned -- Jail",0)) #We have no information about crimes committed in jail, yet, the community statistical area encompass 56 area, including jail. In order to ensure consistency, we must add a 56th observation in this data frame.
sum(CrimeRatePerArea$CrimeRatePerArea) #The total sum is 100, which is what we expect
Again, we map the crimes similarly to the CCTV mapping section.
library(tmap)
baltimore$community %in% CrimeRatePerArea$Community #We see that we have a perfect match
baltimore@data <- left_join(baltimore@data, CrimeRatePerArea, by = c('community' = 'Community'))
Crime_map <- tm_shape(baltimore) + tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Crime_map
Again, we use the tmap package together with the cartogram_ncont function, which basically distorts the map to show our results. Concretely, we want to show that the crime rate is higher in the city center, which can be shown quite neatly graphically.
Distorted_Crime_map <- tm_shape(cartogram_ncont(baltimore, "CrimeRatePerArea"))+tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.07) #This map distorts the size of each area depending on their respective crime rates. It is interesting as it enables one to see that higher crime rates tends to be concentrated in the city center.
Distorted_Crime_map
The first thing we do here is compute the unique values of the Description column of crime_data_with_areas. We see that we have 14 types of crime. We want to observe crimes by type, therefore we make new classifications. The law consists of three basic classifications of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions.
unique(crime_data_with_areas$Description)
#We see that we have 14 types of crime. We want to observe crimes by type, therefore we want to make new classifications. The law consists of three basic classifications of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions.
#Misdemeanor:LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
#Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING
Next, we create a data set called crime_cat which tells us which recorded crime type belongs to which crime category. This data set is later used for a left join with crime_data_with_areas. Finally, we are left with the crime data set enriched with a new column indicating whether the crime was a felony or a misdemeanor.
crime_cat <- data.frame(Category=c("Misdemeanor","Felony"), Description=c(c("LARCENY FROM AUTO,COMMON ASSAULT,ROBBERY - COMMERCIAL,LARCENY"),c("RAPE,ARSON,HOMICIDE,BURGLARY,AUTO THEFT,ROBBERY - CARJACKING,AGG. ASSAULT,ROBBERY - STREET,ROBBERY - RESIDENCE,SHOOTING")))
crime_cat <- separate_rows(crime_cat, Description, sep = ",")
crime_cat$Description %in% unique(crime_data_with_areas$Description) #Ensure we have a perfect match
crime_data_with_areas <- crime_data_with_areas %>%
left_join(crime_cat,by="Description") #We add a new variable to our crime data set
Next, we compute CrimePerCategoryPerArea. Here we use the piping operator, and this time we group_by community and category and summarize the results. Again, we check that we indeed have 349482 observations. From that we compute FelonyStats and MisdemeanorStats, again adding the jail line to each data set.
CrimePerCategoryPerArea <- crime_data_with_areas %>%
group_by(Community,Category) %>%
summarize(RepartitionPerCategoryPerArea=n())
sum(CrimePerCategoryPerArea$RepartitionPerCategoryPerArea) #Again, we check that we indeed have 349482 observations
CrimeCategoryRepartition <- CrimePerCategoryPerArea %>%
group_by(Category) %>%
summarise(Repartition=sum(RepartitionPerCategoryPerArea)) #We observe that in Baltimore, the number of felony is close to the number of misdemeanor
FelonyStats <- CrimePerCategoryPerArea %>% filter(Category=="Felony") %>%
mutate(FelonyRatePerArea = (RepartitionPerCategoryPerArea/CrimeCategoryRepartition$Repartition[1])*100)
FelonyStats[56,] <- list("Unassigned -- Jail","Felony",0,0)
MisdemeanorStats <- CrimePerCategoryPerArea %>% filter(Category=="Misdemeanor") %>%
mutate(MisdemeanorRatePerArea = (RepartitionPerCategoryPerArea/CrimeCategoryRepartition$Repartition[2])*100)
MisdemeanorStats[56,] <- list("Unassigned -- Jail","Misdemeanor",0,0)
After ensuring that we have a perfect match, we perform a left join for felony and misdemeanor respectively and map everything.
#Felony
baltimore$community %in% FelonyStats$Community
baltimore@data <- left_join(baltimore@data, FelonyStats, by = c('community' = 'Community'))
Felony_map <- tm_shape(baltimore) + tm_fill(col = "FelonyRatePerArea", title ="Felony rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Felony_map
#Misdemeanor
baltimore$community %in% MisdemeanorStats$Community
baltimore@data <- left_join(baltimore@data, MisdemeanorStats, by = c('community' = 'Community'))
Misdemeanor_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorRatePerArea", title ="Misdemeanor rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Misdemeanor_map
The idea here is to obtain information about how crime evolved over time. We could have used a loop; instead we created a dataset for each year. The results are interesting: if we compare how many observations we have in each per-year dataset, we see roughly 40'000 cases a year, except for 2020 (due to COVID) and 2021 (which is not finished). We do not create datasets for 2013 and earlier, because we have very few observations dating prior to 2014.
Crime_in_2021 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2021-01-01") & CrimeDateTime <= as.Date("2021-12-31"))
Crime_in_2020 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2020-01-01") & CrimeDateTime <= as.Date("2020-12-31"))
Crime_in_2019 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2019-01-01") & CrimeDateTime <= as.Date("2019-12-31"))
Crime_in_2018 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2018-01-01") & CrimeDateTime <= as.Date("2018-12-31"))
Crime_in_2017 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2017-01-01") & CrimeDateTime <= as.Date("2017-12-31"))
Crime_in_2016 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2016-01-01") & CrimeDateTime <= as.Date("2016-12-31"))
Crime_in_2015 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2015-01-01") & CrimeDateTime <= as.Date("2015-12-31"))
Crime_in_2014 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2014-01-01") & CrimeDateTime <= as.Date("2014-12-31"))
crime_data_with_areas %>% filter(CrimeDateTime < as.Date("2014-01-01")) #We see that we have very few (76) observations before 2014, thus we do not consider them
Next, we calculate the crime rates for each year using the piping operator, grouping by community and summarizing the rates. In the end we create the crime_evolution dataset, which combines all of these.
#_____ Calculations of the crime rates
CrimeRatePerArea2021 <- Crime_in_2021 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2021 = (n()/nrow(Crime_in_2021))*100)
CrimeRatePerArea2021 <- rbind(CrimeRatePerArea2021, list("Unassigned -- Jail", 0))
CrimeRatePerArea2020 <- Crime_in_2020 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2020 = (n()/nrow(Crime_in_2020))*100)
CrimeRatePerArea2020 <- rbind(CrimeRatePerArea2020, list("Unassigned -- Jail", 0))
CrimeRatePerArea2019 <- Crime_in_2019 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2019 = (n()/nrow(Crime_in_2019))*100)
CrimeRatePerArea2019 <- rbind(CrimeRatePerArea2019, list("Unassigned -- Jail", 0))
CrimeRatePerArea2018 <- Crime_in_2018 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2018 = (n()/nrow(Crime_in_2018))*100)
CrimeRatePerArea2018 <- rbind(CrimeRatePerArea2018, list("Unassigned -- Jail", 0))
CrimeRatePerArea2017 <- Crime_in_2017 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2017 = (n()/nrow(Crime_in_2017))*100)
CrimeRatePerArea2017 <- rbind(CrimeRatePerArea2017, list("Unassigned -- Jail", 0))
CrimeRatePerArea2016 <- Crime_in_2016 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2016 = (n()/nrow(Crime_in_2016))*100)
CrimeRatePerArea2016 <- rbind(CrimeRatePerArea2016, list("Unassigned -- Jail", 0))
CrimeRatePerArea2015 <- Crime_in_2015 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2015 = (n()/nrow(Crime_in_2015))*100)
CrimeRatePerArea2015 <- rbind(CrimeRatePerArea2015, list("Unassigned -- Jail", 0))
CrimeRatePerArea2014 <- Crime_in_2014 %>%
  group_by(Community) %>%
  summarize(CrimeRatePerArea2014 = (n()/nrow(Crime_in_2014))*100)
CrimeRatePerArea2014 <- rbind(CrimeRatePerArea2014, list("Unassigned -- Jail", 0))
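The eight near-identical blocks above (and the left_join chain that follows) are mechanical enough to be generated in a loop. Below is a minimal self-contained sketch with toy data; it assumes dplyr >= 1.0 for the glue-style column naming, and the Crime_in_2014/Crime_in_2015 toy frames stand in for the real per-year datasets built earlier:

```r
library(dplyr)

# Toy stand-ins for the Crime_in_YYYY data frames built earlier
Crime_in_2014 <- data.frame(Community = c("A", "A", "B"))
Crime_in_2015 <- data.frame(Community = c("A", "B", "B", "B"))

for (year in 2014:2015) {
  yearly <- get(paste0("Crime_in_", year))
  rate <- yearly %>%
    group_by(Community) %>%
    summarize("CrimeRatePerArea{year}" := (n() / nrow(yearly)) * 100)
  rate <- rbind(rate, list("Unassigned -- Jail", 0))  # same fix-up as above
  assign(paste0("CrimeRatePerArea", year), rate)
}

# Mirrors the left_join chain, newest year first
crime_evolution <- Reduce(
  function(a, b) left_join(a, b, by = "Community"),
  mget(paste0("CrimeRatePerArea", 2015:2014))
)
```

Extending the loop to 2014:2021 reproduces the eight blocks without the copy-paste.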
crime_evolution <- CrimeRatePerArea2021 %>%
  left_join(CrimeRatePerArea2020, by="Community") %>%
  left_join(CrimeRatePerArea2019, by="Community") %>%
  left_join(CrimeRatePerArea2018, by="Community") %>%
  left_join(CrimeRatePerArea2017, by="Community") %>%
  left_join(CrimeRatePerArea2016, by="Community") %>%
  left_join(CrimeRatePerArea2015, by="Community") %>%
  left_join(CrimeRatePerArea2014, by="Community")

First, we create a CCTV_VS_crimes dataset (essentially a left join of the CCTV counts per area with the crime rates per area). We can then plot the crime rate per community against the CCTV density per community; the scatter plot suggests a trend.
CCTV_VS_crimes <- CCTV_per_area %>%
  left_join(CrimeRatePerArea, by="Community")
View(CCTV_VS_crimes)
plot(CCTV_VS_crimes$CrimeRatePerArea,CCTV_VS_crimes$density_perc, main="Crime Rate per Community VS CCTV Density per Community",xlab="CrimeRatePerCommunity",ylab="CCTVDensityPerCommunity")
regression <- lm(CCTV_VS_crimes$density_perc~CCTV_VS_crimes$CrimeRatePerArea)
summary(regression)
intercept <- regression[["coefficients"]][["(Intercept)"]]
slope <- regression[["coefficients"]][["CCTV_VS_crimes$CrimeRatePerArea"]]
range <- seq(from=0, to=4.5, by=0.1) #The plot roughly spans 0 to 4.5
estimation <- slope*range + intercept #Fitted values of the linear model
lines(range,estimation, col="blue")

To confirm this impression, we fit a linear model with lm() and inspect its summary. We then extract the two coefficients (the intercept and the slope), build a sequence from 0 to 4.5 (roughly the range of the plot), and compute the fitted values: the slope multiplied by each value in the sequence, plus the intercept. Finally, we overlay this estimated regression line on the scatter plot.
In the summary of the regression we see that \(R^2\) (which measures the goodness of fit) is quite low, but there still appears to be a tendency.
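As a side note, base R can draw the fitted line directly from the lm object with abline(), which avoids building the range and estimation vectors by hand. A minimal sketch on toy data (in the document's case this would simply be abline(regression, col = "blue") after the scatter plot):

```r
# Toy data standing in for CrimeRatePerArea / density_perc
set.seed(1)
x <- runif(50, 0, 4.5)
y <- 0.8 * x + rnorm(50, sd = 0.5)

fit <- lm(y ~ x)
plot(x, y)
abline(fit, col = "blue")  # draws the intercept + slope line in one call
```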
In this section we map the CCTVs and the crimes together. The method is the same as before with the tmap package; however, this time we use two shapes, tm_shape(baltimore) and tm_shape(balt_dat), which layers the maps on top of each other (as in ggplot). The result gives an intuition about the data: where crime rates are lowest, for instance in the northern and western areas of the city, there seem to be fewer CCTVs, and there appears to be a correlation between the dark red areas and the CCTV density per area.
Crime_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)+ tm_shape(balt_dat) + tm_dots(col="black")
Felony_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "FelonyRatePerArea", title ="Felony rate per Area in %", style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")
Misdemeanor_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorRatePerArea", title ="Misdemeanor rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")
tmap_mode("view") #Use this command to have interactive maps
baltimore@data[["fid"]]<-baltimore@data[["community"]] #We do that so that we see the name of the Community when using an interactive map
tmap_arrange(Crime_and_CCTV_map,Felony_and_CCTV_map,Misdemeanor_and_CCTV_map)

Here we sorted the crime rates per area to find out what their range was, in order to choose between manual breaks and the automatic "quantile" style.
sort(baltimore@data[["CrimeRatePerArea"]])
breaks1 <- c(0,0.5,1,1.5,2,2.5,3,3.5,4,4.5) #Not sure what break to use, for the moment I decided to use the automatic break system with the "quantile" parameter
tmap_mode("plot") #We go back to classic plotting

We now try to see whether the presence of CCTV can deter crime, starting with the question of where crime took place, focusing on August 2021. We chose August 2021 because it is the latest full month in our dataset; taking the most recent period ensures that most of the CCTVs in the dataset were already installed (since we have no information about when exactly each CCTV was added). As before, we create a data table, assign the coordinates, and define the CRS (here EPSG:4326, which we then need to transform). Finally, we map the result with tm_shape; the output shows where crime took place relative to the CCTV locations.
crime_spatial <- as.data.table(crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31")))
coordinates(crime_spatial) <- c("Longitude","Latitude")
proj4string(crime_spatial) <- CRS("+init=epsg:4326")
crime_spatial <- spTransform(crime_spatial,crs.geo1)
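As an aside, the sp workflow above (coordinates(), proj4string(), spTransform()) has a more modern equivalent in the sf package. A minimal sketch with toy coordinates; EPSG:2248 (NAD83 / Maryland State Plane) is used here purely as an illustrative target, since the projection behind crs.geo1 is defined earlier in the document:

```r
library(sf)

# Toy points in WGS84, standing in for crime_spatial's Longitude/Latitude
pts <- data.frame(Longitude = c(-76.61, -76.60), Latitude = c(39.29, 39.30))
pts_sf <- st_as_sf(pts, coords = c("Longitude", "Latitude"), crs = 4326)

# st_transform() replaces spTransform(); tmap accepts sf objects directly
pts_proj <- st_transform(pts_sf, 2248)
```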
August21Crimes_VS_CCTV <- tm_shape(baltimore) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.1, title="Crimes committed in August 2021 VS CCTV location",frame.lwd = 5)+ tm_shape(balt_dat) + tm_dots(col="black")+tm_shape(crime_spatial)+tm_dots(col="red",alpha=0.5)
#It could be interesting to see where crime took place relative to CCTV locations in the area with the highest crime rate in August 2021
tmap_mode("view") #Use this command to have interactive maps
August21Crimes_VS_CCTV